Today we will talk about the truncated singular value decomposition, which will be our
first algorithm for solving inverse problems that are ill-posed.
But before we can start we have to talk about the concept of minimum norm solutions, which
leads us to Definition 3.3. A vector L(F) in R^n is called a least squares solution of the
equation Au = F, where A is an m x n matrix, F is a vector in R^m and u is in R^n, if

||A L(F) - F|| = min_z ||Az - F|| over all z in R^n.

So this means that by varying the variable z we can try to make
the mismatch between Az and F as small as possible, in the sense that the norm of the
difference is minimal. Maybe it's impossible to hit the data F exactly, for example if F is noisy
and the noise has driven us outside of the range of A, so maybe it's impossible to find
a z such that Az = F, but we can certainly minimize this quantity, and the optimal
choice of z, or any optimal choice of z, we call L(F). This is not necessarily unique,
so there might be multiple least squares solutions; typically the minimizers form a whole
affine subspace. So a least squares solution is any vector L(F) which realizes this minimum.
Additionally, L(F) is called the minimum norm solution (note that I said a least squares
solution, because there might be several; minimum norm solution is the keyword here) if the
norm of the vector L(F) itself, so not of its image under A compared to F, but the norm of
L(F) in R^n, is the least possible among all least squares solutions that we can find. So
least squares solutions are not unique in general, but it is always possible to choose the
least squares solution with the least norm, the one with the smallest norm in this set.
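To make both notions concrete, here is a minimal numerical sketch (my own illustration, not
part of the lecture) using numpy. The matrix A below is an assumed rank-deficient example:
its least squares solutions form a whole affine family, and np.linalg.lstsq and
np.linalg.pinv both pick out the minimum norm one.

```python
import numpy as np

# A rank-deficient 2x2 matrix: both columns are equal, so Az depends
# only on z[0] + z[1] and least squares solutions are non-unique.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
F = np.array([1.0, -1.0])

# Every z with z[0] + z[1] = 0 minimizes ||Az - F||: the misfit is then
# ||(0, 0) - (1, -1)|| = sqrt(2), which is optimal because the range of
# A is spanned by (1, 1), and that is orthogonal to F = (1, -1).
for z in [np.array([0.0, 0.0]), np.array([1.0, -1.0]), np.array([5.0, -5.0])]:
    print(z, np.linalg.norm(A @ z - F))  # same misfit sqrt(2) every time

# lstsq returns the minimum norm least squares solution, here (0, 0).
u_min, *_ = np.linalg.lstsq(A, F, rcond=None)
print("minimum norm solution:", u_min)

# The Moore-Penrose pseudoinverse yields the same vector: L(F) = pinv(A) F.
print("via pinv:", np.linalg.pinv(A) @ F)
```

Picking the minimizer with the smallest norm is exactly what the Moore-Penrose pseudoinverse
computes.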
This shows us how to define approximate solutions for ill-posed linear equations like that.
If Hadamard's first condition is not fulfilled, so there is no solution to the problem, we
handle it by requiring not that u solves the equation exactly, but that the reconstruction
minimizes the misfit between A applied to the reconstruction and F. And if Hadamard's second
condition is violated, which relates to non-uniqueness of such a reconstruction, we fix it
by making the choice unique: we require that the reconstruction
is the minimum norm solution. But first let's talk about some examples.

Example 1. Let A be the matrix (1, 1)^T; it looks like a vector, but it's actually a matrix
with two rows and one column. And let F be the vector (1, -1)^T. Let's look at Au - F; I'm
going to look at the square of the Euclidean norm, because that's easier to think about.
Here u is just a real number, a vector in R^1 with one component so to speak, so
Au = (u, u)^T and

||Au - F||^2 = ||(u - 1, u + 1)^T||^2 = (u - 1)^2 + (u + 1)^2.

We can simplify this further by multiplying it out: u^2 - 2u + 1 + u^2 + 2u + 1 = 2u^2 + 2.
And remember, this is what we want to do: we want to find least squares solutions, so we
want to minimize this term, and it is minimal if we choose u as the zero vector, the
one-dimensional zero vector.
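As a quick numerical sanity check (again my own sketch, not from the lecture), the squared
misfit really is the parabola 2u^2 + 2:

```python
import numpy as np

# Example 1: A = (1, 1)^T as a 2x1 matrix, F = (1, -1)^T.
A = np.array([[1.0],
              [1.0]])
F = np.array([1.0, -1.0])

# The squared misfit should equal 2*u**2 + 2 for every u,
# so it is smallest at u = 0.
for u in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    misfit = np.linalg.norm(A @ np.array([u]) - F) ** 2
    print(u, misfit, 2 * u**2 + 2)  # last two columns coincide
```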
And there is no ambiguity here: the least squares solution is actually unique, so L(F) = 0
is the only possible choice. If we pick any other u, the misfit term will be strictly
larger, so it will not give us a least squares solution. And since the least squares
solution is unique, it is also the minimum norm solution: if you have a set of cardinality
1, then the representative of the set with minimum norm is just the singleton element in
the set. So L(F) = 0 is already the minimum norm solution.
But just to make sure: what is A L(F)? It is the zero vector (0, 0)^T, and that is not the
same as F = (1, -1)^T. So indeed, Au = F has no solution at all, and now we see that this
is an ill-posed inverse problem, because there is no solution.
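This conclusion can also be checked numerically; a minimal numpy sketch (my own
illustration):

```python
import numpy as np

# Example 1: the 2x1 matrix A = (1, 1)^T and the data F = (1, -1)^T.
A = np.array([[1.0],
              [1.0]])
F = np.array([1.0, -1.0])

# lstsq returns the (here unique) least squares solution L(F) = 0.
u, residual, rank, sv = np.linalg.lstsq(A, F, rcond=None)
print("L(F) =", u)                    # [0.]
print("A L(F) =", A @ u)              # [0. 0.], which is not F
print("misfit:", np.linalg.norm(A @ u - F))  # sqrt(2) > 0: no u solves Au = F
```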